A gradient descent rule for spiking neurons emitting multiple spikes

Authors

  • Olaf Booij
  • Hieu Tat Nguyen
Abstract

A supervised learning rule for Spiking Neural Networks (SNNs) is presented that can cope with neurons that spike multiple times. The rule is developed by extending the existing SpikeProp algorithm, which could only be used with one spike per neuron. The problem caused by the discontinuity of the spike process is counteracted with a simple but effective rule, which makes the learning process more efficient. Our learning rule is successfully tested on a classification task of Poisson spike trains. We also applied the algorithm to a temporal version of the XOR problem and show that it is possible to learn this classical problem using only one spiking neuron, making use of a hair-trigger situation.
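Since the abstract only states the idea at a high level, the sketch below illustrates the SpikeProp-style machinery it builds on: a Spike Response Model neuron that may cross threshold several times, and a weight update obtained from the linearisation dt/dw = -(∂u/∂w)/(∂u/∂t) at a threshold crossing. This is a rough, assumption-laden illustration only; the kernel shapes, constants, and names (`eps`, `eta`, `simulate`, `spikeprop_delta_w`) are choices made for this sketch and are not the paper's notation or exact rule.

```python
# Minimal sketch of a SpikeProp-style update for one Spike Response Model (SRM)
# neuron that may fire several times.  Illustration only; not the paper's rule.
import numpy as np

TAU = 5.0        # PSP time constant (ms), assumed
THRESHOLD = 1.0  # firing threshold, assumed
TAU_REF = 10.0   # refractory time constant (ms), assumed
DT = 0.1         # simulation step (ms)

def eps(s):
    """Alpha-shaped postsynaptic potential kernel (0 for s <= 0)."""
    return np.where(s > 0, (s / TAU) * np.exp(1.0 - s / TAU), 0.0)

def eta(s):
    """Reset (refractory) kernel applied after each output spike."""
    return np.where(s > 0, -THRESHOLD * np.exp(-s / TAU_REF), 0.0)

def simulate(weights, input_spikes, t_max=50.0):
    """Return output spike times and the membrane potential trace."""
    times = np.arange(0.0, t_max, DT)
    out_spikes = []
    potential = np.zeros_like(times)
    for i, t in enumerate(times):
        u = sum(w * eps(t - ts)
                for w, train in zip(weights, input_spikes)
                for ts in train)
        u += sum(eta(t - tf) for tf in out_spikes)   # refractoriness from earlier spikes
        potential[i] = u
        if u >= THRESHOLD:
            out_spikes.append(t)                     # the neuron may fire again later
    return np.array(out_spikes), times, potential

def spikeprop_delta_w(weights, input_spikes, t_out, t_target, lr=0.1):
    """One gradient step on E = 0.5*(t_out - t_target)^2 for one output spike.

    Uses the SpikeProp linearisation dt/dw_j = -(du/dw_j) / (du/dt) evaluated
    at the threshold crossing, where du/dw_j is the summed PSP of synapse j.
    """
    # Numerical slope of the feedforward potential at the spike time.
    du_dt = float(sum(w * (eps(t_out + DT - ts) - eps(t_out - DT - ts))
                      for w, train in zip(weights, input_spikes)
                      for ts in train) / (2 * DT))
    delta = np.zeros_like(weights)
    for j, train in enumerate(input_spikes):
        du_dwj = sum(eps(t_out - ts) for ts in train)
        dt_dwj = -du_dwj / max(du_dt, 1e-6)          # avoid division by ~0
        delta[j] = -lr * (t_out - t_target) * dt_dwj
    return delta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.uniform(0.3, 0.7, size=3)
    input_spikes = [[2.0, 12.0], [4.0], [6.0, 20.0]]  # several spikes per input
    spikes, _, _ = simulate(weights, input_spikes)
    if len(spikes):
        weights += spikeprop_delta_w(weights, input_spikes, spikes[0], t_target=8.0)
```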


Related articles

Temporal Pattern Classification using Spiking Neural Networks

A novel supervised learning rule is derived for Spiking Neural Networks (SNNs) using the gradient descent method, which can be applied to networks with a multi-layered architecture. All existing learning rules for SNNs limit the spiking neurons to fire only once. Our algorithm, however, is specifically designed to cope with neurons that fire multiple spikes, taking full advantage of the capabilities...


Gradient Descent for Spiking Neural Networks

Many studies of neural computation are based on network models of static neurons that produce analog output, despite the fact that information processing in the brain is predominantly carried out by dynamic neurons that produce discrete pulses called spikes. Research in spike-based computation has been impeded by the lack of efficient supervised learning algorithms for spiking networks. Here,...


Learning Precise Spike Train-to-Spike Train Transformations in Multilayer Feedforward Neuronal Networks

We derive a synaptic weight update rule for learning temporally precise spike train-to-spike train transformations in multilayer feedforward networks of spiking neurons. The framework, aimed at seamlessly generalizing error backpropagation to the deterministic spiking neuron setting, is based strictly on spike timing and avoids invoking concepts pertaining to spike rates or probabilistic models...


Gradient Learning in Networks of Smoothly Spiking Neurons

A slightly simplified version of the Spike Response Model (SRM0) of a spiking neuron is tailored to gradient learning. In particular, the evolution of spike trains along the weight and delay parameter trajectories is made perfectly smooth. For this model a back-propagation-like learning rule is derived which propagates the error also along the time axis. This approach overcomes the difficulties...
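As a loose illustration of what "making the spike process smooth" can mean, the toy below contrasts the hard threshold rule, whose spike time appears and disappears discontinuously as a weight changes, with a sigmoid-smoothed surrogate that varies differentiably. This is a generic smoothing trick sketched here for intuition, not the SRM0 construction of the paper above; every name and constant is invented for the sketch.

```python
# Toy contrast between a hard-threshold spike time and a smoothed surrogate.
# Generic illustration only; not the construction used in the cited paper.
import numpy as np

BETA = 20.0       # sigmoid steepness, assumed
THRESHOLD = 1.0   # firing threshold, assumed

def psp(t, t_in, tau=5.0):
    """Simple exponential postsynaptic potential starting at t_in."""
    return np.where(t > t_in, np.exp(-(t - t_in) / tau), 0.0)

def hard_spike_time(w, t_in=2.0, t_max=30.0, dt=0.01):
    """First threshold crossing of the hard model (None if it never fires)."""
    t = np.arange(0.0, t_max, dt)
    u = w * psp(t, t_in)
    idx = np.argmax(u >= THRESHOLD)
    return t[idx] if u[idx] >= THRESHOLD else None

def soft_spike_time(w, t_in=2.0, t_max=30.0, dt=0.01):
    """Sigmoid-weighted mean time: a smooth stand-in for the spike timing."""
    t = np.arange(0.0, t_max, dt)
    u = w * psp(t, t_in)
    p = 1.0 / (1.0 + np.exp(-BETA * (u - THRESHOLD)))  # soft threshold crossing
    return np.sum(t * p) / (np.sum(p) + 1e-9)

# The hard spike time jumps from "no spike" to a spike as w crosses 1.0,
# while the soft surrogate (and its finite-difference gradient) stays smooth.
for w in (0.9, 1.1, 1.5):
    g = (soft_spike_time(w + 1e-4) - soft_spike_time(w - 1e-4)) / 2e-4
    print(f"w={w:.1f}  hard={hard_spike_time(w)}  soft={soft_spike_time(w):.2f}  d(soft)/dw={g:.2f}")
```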


On the Algorithmic Power of Spiking Neural Networks

Spiking Neural Networks (SNNs) are mathematical models in neuroscience that describe the dynamics of a set of neurons which interact by firing spike signals at one another. Interestingly, recent works observed that for an integrate-and-fire model, when configured appropriately (e.g., after the parameters are learned properly), the neurons' firing rate, i.e., the average number of...
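To make the terminology above concrete, here is a minimal leaky integrate-and-fire simulation in which the firing rate is simply the number of emitted spikes divided by the simulated time. The parameter values and the function name `lif_firing_rate` are arbitrary choices for this sketch and are not taken from the cited work.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the "firing rate" is just
# spikes emitted per second of simulated time.  Parameters are arbitrary.

def lif_firing_rate(input_current, t_max=1.0, dt=1e-4,
                    tau_m=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate an LIF neuron with constant input and return spikes/second."""
    v = v_rest
    n_spikes = 0
    for _ in range(int(t_max / dt)):
        # Leaky integration of the membrane potential.
        v += dt * (-(v - v_rest) + input_current) / tau_m
        if v >= v_thresh:          # threshold crossing: emit a spike and reset
            n_spikes += 1
            v = v_reset
    return n_spikes / t_max        # average number of spikes per second

if __name__ == "__main__":
    for current in (0.8, 1.2, 2.0):
        print(f"I = {current:.1f}  ->  rate = {lif_firing_rate(current):.1f} Hz")
```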



Journal:
  • Inf. Process. Lett.

Volume 95, Issue -

Pages -

Publication date: 2005